We propose a sparse end-to-end multi-person pose regression framework, termed QueryPose, which directly predicts multi-person keypoint sequences from the input image. Existing end-to-end methods rely on dense representations to preserve spatial detail and structure for precise keypoint localization; however, the dense paradigm introduces complex and redundant post-processing during inference. In our framework, each human instance is encoded by several learnable spatial-aware part-level queries associated with an instance-level query. First, we propose the Spatial Part Embedding Generation Module (SPEGM), which uses a local spatial attention mechanism to generate spatial-sensitive part embeddings that carry the spatial details and structural information needed to enhance the part-level queries. Second, we introduce the Selective Iteration Module (SIM) to adaptively update the sparse part-level queries with the generated spatial-sensitive part embeddings stage by stage. With these two modules, the part-level queries fully encode the spatial details and structural information required for precise keypoint regression. Thanks to bipartite matching, QueryPose avoids hand-designed post-processing and surpasses existing dense end-to-end methods with 73.6 AP on the MS COCO mini-val set and 72.7 AP on the CrowdPose test set. Code is available at https://github.com/buptxyb666/QueryPose.
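To make the query-update idea concrete, below is a minimal PyTorch sketch of one selective-iteration step, in which K part-level queries are gated against K spatial-sensitive part embeddings. The module name, the sigmoid-gated update rule, and all tensor shapes are illustrative assumptions; the real SIM operates stage by stage inside a larger decoder and is not reproduced here.

```python
# Hedged sketch of the query-update idea described above (not the authors' code).
# The gating form and the names SelectiveIteration / part_query / part_embed are assumptions.
import torch
import torch.nn as nn

class SelectiveIteration(nn.Module):
    """Gated update of K part-level queries from K spatial-sensitive part embeddings."""
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)   # decides how much new evidence to take in
        self.proj = nn.Linear(dim, dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, part_query: torch.Tensor, part_embed: torch.Tensor) -> torch.Tensor:
        # part_query, part_embed: (num_instances, K_parts, dim)
        g = torch.sigmoid(self.gate(torch.cat([part_query, part_embed], dim=-1)))
        updated = g * self.proj(part_embed) + (1.0 - g) * part_query
        return self.norm(updated)

if __name__ == "__main__":
    sim = SelectiveIteration(dim=256)
    q = torch.randn(2, 17, 256)   # one query per part, two instances
    e = torch.randn(2, 17, 256)   # part embeddings produced by an SPEGM-like module
    print(sim(q, e).shape)        # torch.Size([2, 17, 256])
```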
Off-the-shelf single-stage multi-person pose regression methods generally rely on the instance score (i.e., the confidence of instance localization) to indicate pose quality when selecting pose candidates. We argue that there are two gaps in this paradigm: 1) the instance score is not well correlated with pose regression quality; 2) the instance feature representation used to predict the instance score does not explicitly encode structural pose information, and therefore cannot produce a reasonable score that reflects pose regression quality. To address these problems, we propose to learn pose regression quality-aware representations. Specifically, for the first gap, instead of using the previous instance confidence labels (e.g., discrete {1, 0} or Gaussian representations) to denote the position and confidence of a human instance, we first introduce a Consistent Instance Representation (CIR) that unifies the pose regression quality score of an instance and the background confidence into a pixel-wise score map, calibrating the inconsistency between the instance score and the pose regression quality. To fill the second gap, we further present a Query Encoding Module (QEM) comprising a Keypoint Query Encoding (KQE) that encodes the positional and semantic information of each keypoint, and a Pose Query Encoding (PQE) that explicitly encodes the predicted structural pose information, so as to better fit the Consistent Instance Representation (CIR). With the proposed components, we substantially close both gaps. Our method outperforms previous single-stage regression-based and even bottom-up methods, achieving a state-of-the-art result of 71.7 AP on the MS COCO test-dev set.
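As a rough illustration of a pixel-wise quality target in the spirit of CIR, the sketch below labels the pixels of each instance with the OKS between its predicted and ground-truth pose and labels background pixels with zero. The OKS formula follows the standard COCO definition; the per-pixel assignment via instance masks is an assumption for illustration, not the paper's exact construction.

```python
# Hedged sketch of a CIR-style pixel-wise quality target (formulation assumed).
import numpy as np

def oks(pred, gt, vis, area, sigmas):
    # pred, gt: (K, 2) keypoint coordinates; vis: (K,) visibility; area: instance area
    d2 = np.sum((pred - gt) ** 2, axis=-1)
    k = 2.0 * sigmas                          # COCO per-keypoint constants
    e = d2 / (2.0 * area * k ** 2 + 1e-9)
    return float(np.sum(np.exp(-e) * vis) / (np.sum(vis) + 1e-9))

def cir_target(hw, instance_masks, pred_poses, gt_poses, vis, areas, sigmas):
    """Build an (H, W) score map: OKS inside each instance mask, 0 on background."""
    target = np.zeros(hw, dtype=np.float32)
    for mask, p, g, v, a in zip(instance_masks, pred_poses, gt_poses, vis, areas):
        target[mask] = oks(p, g, v, a, sigmas)
    return target
```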
Multi-person pose estimation methods generally follow top-down or bottom-up paradigms, both of which can be regarded as two-stage approaches and therefore suffer from high computational cost and low efficiency. In this paper, aiming at a compact and efficient pipeline for multi-person pose estimation, we propose to represent human parts as points and introduce a novel body representation that leverages an adaptive point set, consisting of the human center and seven human parts, to represent human instances in a more fine-grained manner. The new representation is better able to capture diverse pose deformations and adaptively factorizes long-range center-to-joint displacements, yielding a single-stage differentiable network, termed AdaptivePose, that regresses multi-person poses more accurately. At inference time, the proposed network removes the grouping and refinement steps and needs only a single-step decoding process to form multi-person poses. Without bells and whistles, we achieve the best speed-accuracy trade-offs of 67.4% AP at 29.4 FPS with DLA-34 and 71.3% AP at 9.1 FPS on the COCO test-dev dataset.
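The single-step decoding described above can be sketched as follows: peaks of a center heatmap give instance locations, a first offset head moves each center to seven intermediate part points, and a second offset head moves each part point to the joints it is responsible for. The head layout, tensor shapes, and the part-to-joint grouping table are assumptions made only for illustration.

```python
# Hedged sketch of single-step pose decoding with an adaptive point set (layout assumed).
import torch

def decode_poses(center_heatmap, center2part, part2joint, part_of_joint, topk=20, thr=0.3):
    # center_heatmap: (H, W); center2part: (7, 2, H, W); part2joint: (K, 2, H, W)
    # part_of_joint: list of length K mapping each joint to its responsible part index
    H, W = center_heatmap.shape
    scores, idx = center_heatmap.flatten().topk(topk)
    idx = idx[scores > thr]
    ys = torch.div(idx, W, rounding_mode="floor")
    xs = idx % W
    poses = []
    for y, x in zip(ys.tolist(), xs.tolist()):
        center = torch.tensor([x, y], dtype=torch.float32)
        parts = torch.stack([center + center2part[p, :, y, x]
                             for p in range(center2part.shape[0])])
        joints = []
        for j in range(part2joint.shape[0]):
            px, py = parts[part_of_joint[j]]
            pyi = int(py.clamp(0, H - 1)); pxi = int(px.clamp(0, W - 1))
            joints.append(parts[part_of_joint[j]] + part2joint[j, :, pyi, pxi])
        poses.append(torch.stack(joints))
    return poses  # list of (K, 2) keypoint sets, one per detected person
```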
Gaze object prediction (GOP) is a new task that aims to discover the objects being stared at by humans. It has great application value but still lacks a unified solution framework. An intuitive solution is to attach an object detection branch to an existing gaze prediction method. However, previous gaze prediction methods usually use two different networks to extract features from the scene image and the head image, which results in a heavy architecture and prevents the branches from being jointly optimized. In this paper, we build a novel framework named GaTector to tackle gaze object prediction in a unified way. In particular, a specific-general-specific (SGS) feature extractor is first proposed to use a shared backbone to extract general features from both the scene and head images. To better account for the specificity of the inputs and tasks, SGS introduces two input-specific blocks before the shared backbone and three task-specific blocks after it. Specifically, a novel defocus layer is designed to generate object-specific features for the detection task without losing information or requiring extra computation. Moreover, an energy aggregation loss is introduced to guide the gaze heatmap to concentrate on the stared box. Finally, we propose a novel MDAP metric that can reveal the difference between boxes even when they share no overlapping area. Extensive experiments on the GOO dataset verify the superiority of our method on all three tracks, i.e., object detection, gaze estimation, and gaze object prediction.
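A minimal sketch of an energy-aggregation style loss is given below, assuming the simplest possible formulation: the loss is the fraction of gaze-heatmap energy that falls outside the ground-truth stared box. The actual loss in the paper may differ in detail.

```python
# Hedged sketch of an energy-aggregation style loss (exact formulation assumed).
import torch

def energy_aggregation_loss(heatmap: torch.Tensor, box: torch.Tensor) -> torch.Tensor:
    # heatmap: (H, W) non-negative gaze heatmap; box: (4,) = (x1, y1, x2, y2) in pixels
    x1, y1, x2, y2 = box.long().tolist()
    inside = heatmap[y1:y2, x1:x2].sum()
    total = heatmap.sum() + 1e-9
    return 1.0 - inside / total   # 0 when all heatmap energy falls inside the stared box
```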
Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction. The proposed approach achieves state-of-the-art performance on various datasets. It came first in the ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields the new record of mIoU accuracy 85.4% on PASCAL VOC 2012 and accuracy 80.2% on Cityscapes.
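The pyramid pooling module can be written compactly in PyTorch as below. The bin sizes (1, 2, 3, 6) and the channel reduction follow the commonly used PSPNet configuration; the ResNet backbone, final classifier, and auxiliary loss are omitted, so this is a sketch of the context-aggregation step only.

```python
# Compact sketch of the pyramid pooling module described above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    def __init__(self, in_ch: int, bins=(1, 2, 3, 6)):
        super().__init__()
        out_ch = in_ch // len(bins)
        self.stages = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(b),
                          nn.Conv2d(in_ch, out_ch, 1, bias=False),
                          nn.BatchNorm2d(out_ch),
                          nn.ReLU(inplace=True))
            for b in bins
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[-2:]
        # pool at several grid sizes, upsample back, and concatenate with the input features
        priors = [F.interpolate(stage(x), size=(h, w), mode="bilinear",
                                align_corners=False) for stage in self.stages]
        return torch.cat([x] + priors, dim=1)   # global context prior + local features

if __name__ == "__main__":
    ppm = PyramidPooling(2048)
    feat = torch.randn(1, 2048, 60, 60)
    print(ppm(feat).shape)   # torch.Size([1, 4096, 60, 60])
```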
A recent study has revealed a phenomenon called neural collapse: at the terminal phase of training for classification, the within-class means of features and the classifier weight vectors converge to the vertices of a simplex equiangular tight frame. In this paper, we explore the corresponding structures of the last-layer feature centers and classifiers in semantic segmentation. Based on our empirical and theoretical analysis, we point out that semantic segmentation naturally brings contextual correlation and imbalanced distribution among classes, which breaks the equiangular and maximally separated structure of neural collapse for both feature centers and classifiers. However, such a symmetric structure is beneficial to discrimination for the minority classes. To preserve these advantages, we introduce a regularizer on feature centers to encourage the network to learn features closer to this appealing structure under imbalanced semantic segmentation. Experimental results show that our method brings significant improvements on both 2D and 3D semantic segmentation benchmarks. Moreover, our method ranks 1st and sets a new record (+6.8% mIoU) on the ScanNet200 test leaderboard. Code will be available at https://github.com/dvlab-research/Imbalanced-Learning.
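A minimal sketch of a center regularizer in this spirit is shown below: it pulls the pairwise cosine similarities of the class feature centers toward the simplex-ETF value of -1/(K-1). The squared-error form is an assumption; the paper's exact regularizer is not reproduced here.

```python
# Hedged sketch: encourage feature centers toward an equiangular (simplex-ETF-like) layout.
import torch
import torch.nn.functional as F

def etf_center_regularizer(centers: torch.Tensor) -> torch.Tensor:
    # centers: (K, D) per-class feature centers (e.g. running means of last-layer features)
    K = centers.shape[0]
    c = F.normalize(centers - centers.mean(dim=0, keepdim=True), dim=1)
    cos = c @ c.t()                                   # (K, K) cosine similarities
    target = torch.full_like(cos, -1.0 / (K - 1))     # the simplex-ETF pairwise angle
    off_diag = ~torch.eye(K, dtype=torch.bool, device=cos.device)
    return ((cos - target)[off_diag] ** 2).mean()

if __name__ == "__main__":
    centers = torch.randn(200, 512, requires_grad=True)   # e.g. 200 classes as in ScanNet200
    loss = etf_center_regularizer(centers)
    loss.backward()
    print(float(loss))
```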
Although considerable progress has been obtained in neural network quantization for efficient inference, existing methods are not scalable to heterogeneous devices, as one dedicated model needs to be trained, transmitted, and stored for each specific hardware setting, incurring considerable costs in model training and maintenance. In this paper, we study a new vertical-layered representation of neural network weights for encapsulating all quantized models into a single one. With this representation, we can theoretically achieve any-precision networks for on-demand service while only needing to train and maintain one model. To this end, we propose a simple once quantization-aware training (QAT) scheme for obtaining high-performance vertical-layered models. Our design incorporates a cascade downsampling mechanism that allows us to obtain multiple quantized networks from one full-precision source model by progressively mapping the higher-precision weights to their adjacent lower-precision counterparts. Then, with networks of different bit-widths derived from one source model, multi-objective optimization is employed to train the shared source weights so that they can be updated simultaneously while considering the performance of all networks. By doing this, the shared weights are optimized to balance the performance of the different quantized models, making them transferable across bit-widths. Experiments show that the proposed vertical-layered representation and the once QAT scheme are effective in embodying multiple quantized networks in a single model and allow one-time training, while delivering performance comparable to that of quantized models tailored to specific bit-widths. Code will be available.
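The cascade downsampling idea can be illustrated with a small sketch: an 8-bit integer code of the shared weights is progressively mapped to lower precisions by dropping least-significant bits, so every bit-width is derived from one source model; a multi-objective training step would then simply sum the task losses of all derived networks. The symmetric quantizer and the bit-dropping rule below are assumptions made for illustration, not the paper's exact scheme.

```python
# Hedged sketch of cascade weight downsampling across bit-widths (quantizer assumed).
import torch

def quantize_int(w: torch.Tensor, bits: int):
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax
    code = torch.clamp(torch.round(w / scale), -qmax, qmax)
    return code, scale

def cascade_codes(w: torch.Tensor, bit_widths=(8, 6, 4, 2)):
    """Return {bits: dequantized weights}, each lower precision derived from the higher one."""
    out = {}
    code, scale = quantize_int(w, bit_widths[0])
    out[bit_widths[0]] = code * scale
    for prev, cur in zip(bit_widths[:-1], bit_widths[1:]):
        shift = prev - cur
        code = torch.div(code, 2 ** shift, rounding_mode="floor")   # drop low bits
        out[cur] = code * (scale * 2 ** shift)
    return out

if __name__ == "__main__":
    w = torch.randn(64, 64)
    for b, wq in cascade_codes(w).items():
        print(b, "bits, max abs error:", float((w - wq).abs().max()))
```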
Open-vocabulary scene understanding aims to localize and recognize unseen categories beyond the annotated label space. The recent breakthrough of 2D open-vocabulary perception is largely driven by Internet-scale paired image-text data with rich vocabulary concepts. However, this success cannot be directly transferred to 3D scenarios due to the inaccessibility of large-scale 3D-text pairs. To this end, we propose to distill knowledge encoded in pre-trained vision-language (VL) foundation models through captioning multi-view images from 3D, which allows explicitly associating 3D and semantic-rich captions. Further, to facilitate coarse-to-fine visual-semantic representation learning from captions, we design hierarchical 3D-caption pairs, leveraging geometric constraints between 3D scenes and multi-view images. Finally, by employing contrastive learning, the model learns language-aware embeddings that connect 3D and text for open-vocabulary tasks. Our method not only remarkably outperforms baseline methods by 25.8% $\sim$ 44.7% hIoU and 14.5% $\sim$ 50.4% hAP$_{50}$ on open-vocabulary semantic and instance segmentation, but also shows robust transferability on challenging zero-shot domain transfer tasks. Code will be available at https://github.com/CVMI-Lab/PLA.
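The contrastive objective can be sketched as a standard two-way InfoNCE between pooled 3D features and caption embeddings. The pooling over the points covered by each caption, the use of a frozen CLIP-style text encoder, and the temperature value are assumptions rather than the paper's exact training recipe.

```python
# Hedged sketch of a point-caption contrastive loss (InfoNCE form assumed).
import torch
import torch.nn.functional as F

def point_caption_contrastive(point_feats: torch.Tensor,
                              caption_embeds: torch.Tensor,
                              temperature: float = 0.07) -> torch.Tensor:
    # point_feats: (B, D) pooled features of the point set covered by each caption
    # caption_embeds: (B, D) text embeddings from a frozen VL model (e.g. a CLIP text encoder)
    p = F.normalize(point_feats, dim=-1)
    t = F.normalize(caption_embeds, dim=-1)
    logits = p @ t.t() / temperature          # (B, B) similarity matrix
    labels = torch.arange(p.shape[0], device=p.device)
    # symmetric loss: match each point set to its caption and each caption to its point set
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))
```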
Weakly supervised detection of anomalies in surveillance videos is a challenging task. Going beyond existing works, which have limited ability to localize anomalies in long videos, we propose a novel glance-and-focus network to effectively integrate spatial-temporal information for accurate anomaly detection. In addition, we empirically found that existing approaches that use feature magnitudes to represent the degree of anomaly typically ignore the effects of scene variations, and hence suffer from sub-optimal performance due to the inconsistency of feature magnitudes across scenes. To address this issue, we propose a Feature Amplification Mechanism and a Magnitude Contrastive Loss to enhance the discriminativeness of feature magnitudes for detecting anomalies. Experimental results on two large-scale benchmarks, UCF-Crime and XD-Violence, demonstrate that our method outperforms state-of-the-art approaches.
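A hedged sketch of a magnitude-contrastive term is given below: feature norms of the top-scoring snippets from anomalous videos are pushed above those of normal snippets by a margin, while normal magnitudes are pulled toward similar values across videos so they remain comparable across scenes. The margin form and the top-k selection are illustrative assumptions, not the paper's exact loss.

```python
# Hedged sketch of a magnitude-contrastive loss (margin form assumed).
import torch
import torch.nn.functional as F

def magnitude_contrastive_loss(normal_feats, abnormal_feats, margin: float = 1.0):
    # normal_feats, abnormal_feats: (N, T, D) snippet features from normal / anomalous videos
    mag_n = normal_feats.norm(dim=-1)                     # (N, T) feature magnitudes
    mag_a = abnormal_feats.norm(dim=-1)
    # separate: top-magnitude abnormal snippets should exceed normal snippets by a margin
    sep = F.relu(margin + mag_n.mean(dim=1)
                 - mag_a.topk(3, dim=1).values.mean(dim=1)).mean()
    # align: normal magnitudes should stay consistent across different videos / scenes
    align = mag_n.mean(dim=1).var()
    return sep + align
```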
Deep learning has attained remarkable success in many 3D visual recognition tasks, including shape classification, object detection, and semantic segmentation. However, many of these results rely on manually collecting densely annotated real-world 3D data, which is highly time-consuming and expensive to obtain, limiting the scalability of 3D recognition tasks. Thus, we study unsupervised 3D recognition and propose a Self-supervised-Self-Labeled 3D Recognition (SL3D) framework. SL3D simultaneously solves two coupled objectives, i.e., clustering and learning feature representation to generate pseudo-labeled data for unsupervised 3D recognition. SL3D is a generic framework and can be applied to solve different 3D recognition tasks, including classification, object detection, and semantic segmentation. Extensive experiments demonstrate its effectiveness. Code is available at https://github.com/fcendra/sl3d.
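One round of the coupled clustering / pseudo-labelling loop can be sketched as follows, assuming a k-means step over learned features (the paper's clustering method may differ): cluster assignments become pseudo-labels, and the encoder and classifier are then trained to predict them, alternating between the two objectives.

```python
# Hedged sketch of one clustering + pseudo-label training round (k-means step assumed).
import torch
import torch.nn as nn

def kmeans_assign(feats: torch.Tensor, centroids: torch.Tensor) -> torch.Tensor:
    return torch.cdist(feats, centroids).argmin(dim=1)     # (N,) pseudo-labels

def sl3d_round(encoder: nn.Module, classifier: nn.Module, points: torch.Tensor,
               num_clusters: int, opt: torch.optim.Optimizer):
    # points: (N, P, 3) unlabeled object point clouds; encoder maps them to (N, D) features
    with torch.no_grad():
        feats = encoder(points)
        idx = torch.randperm(feats.shape[0])[:num_clusters]
        centroids = feats[idx].clone()
        for _ in range(10):                                 # a few Lloyd iterations
            assign = kmeans_assign(feats, centroids)
            for c in range(num_clusters):
                if (assign == c).any():
                    centroids[c] = feats[assign == c].mean(dim=0)
        pseudo = kmeans_assign(feats, centroids)
    # supervised step on the generated pseudo-labels
    logits = classifier(encoder(points))
    loss = nn.functional.cross_entropy(logits, pseudo)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss
```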